# Reading Comprehension Optimization

### Modernbert Base Squad2 V0.2

Apache-2.0 · Praise2112 · 42 downloads · 2 likes · Question Answering System · Transformers

QA model fine-tuned from ModernBERT-base-nli, supporting long-context processing.

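The models in this list are extractive QA checkpoints intended to be loaded through the Transformers `question-answering` pipeline. A minimal usage sketch follows; the Hub repository ID is an assumption reconstructed from the name and author shown above, so substitute the model's actual identifier.

```python
# Minimal sketch: extractive QA with the Transformers pipeline.
# The repo ID below is an assumption inferred from the listing (name + author);
# replace it with the model's actual Hugging Face Hub identifier.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Praise2112/ModernBERT-base-squad2-v0.2",  # hypothetical repo ID
)

context = (
    "The Stanford Question Answering Dataset (SQuAD) pairs Wikipedia passages "
    "with questions whose answers are spans of text inside the passage."
)
print(qa(question="What does SQuAD pair Wikipedia passages with?", context=context))
# -> {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```
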
### Bert Base Chinese Finetuned Squadv2

real-jiakai · 33 downloads · 1 like · Question Answering System · Transformers

A fine-tuned version of bert-base-chinese on the Chinese SQuAD v2.0 dataset, specifically designed for Chinese QA tasks and supporting both answerable and unanswerable questions.

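Because SQuAD v2.0 includes unanswerable questions, v2-style checkpoints such as this one are typically queried with `handle_impossible_answer=True`, which lets the pipeline return an empty answer. A hedged sketch, again with an assumed repository ID:

```python
# Sketch of SQuAD v2-style inference, where the model may decline to answer.
# The repo ID is an assumption inferred from the listing; replace as needed.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="real-jiakai/bert-base-chinese-finetuned-squadv2",  # hypothetical repo ID
)

result = qa(
    question="文中提到的河流有多长？",  # "How long is the river mentioned in the text?"
    context="长江是亚洲最长的河流，流经中国多个省份。",  # the context never states a length
    handle_impossible_answer=True,  # allow an empty string for unanswerable questions
)
print(result)  # an empty 'answer' indicates the question was judged unanswerable
```
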
### Roberta Finetuned Subjqa Movies 1

skandavivek2 · 15 downloads · 0 likes · Question Answering System · Transformers

A QA model fine-tuned from deepset/roberta-base-squad2, specialized in movie-related QA tasks.

### Electra Squad Training

Apache-2.0 · mlxen · 20 downloads · 0 likes · Question Answering System · Transformers

An ELECTRA-small model fine-tuned on the SQuAD dataset for question-answering tasks.

### Electra Contrastdata Squad

Apache-2.0 · mlxen · 19 downloads · 0 likes · Question Answering System · Transformers

A fine-tuned version of the ELECTRA-small discriminator trained on the SQuAD dataset, suitable for question-answering tasks.

### Bart Base Few Shot K 256 Finetuned Squad Seed 2

Apache-2.0 · anas-awadalla · 13 downloads · 0 likes · Question Answering System · Transformers

A question-answering model based on the BART-base architecture, fine-tuned on the SQuAD dataset for few-shot learning scenarios (the sketch below shows what the K and seed values in the name most likely refer to).

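The remaining few-shot checkpoints in this list differ only in the training-set size K and the sampling seed encoded in their names. The exact recipe is not documented here; a plausible reading is that each model was fine-tuned on K SQuAD training examples drawn with the given seed, roughly as in the hypothetical sketch below (the dataset identifier and sampling method are assumptions).

```python
# Hypothetical reconstruction of the few-shot setup implied by the model names:
# "K 256 ... Seed 2" is read as 256 SQuAD training examples sampled with seed 2.
# The actual training recipe for these checkpoints is not documented in this list.
from datasets import load_dataset

K, SEED = 256, 2

train = load_dataset("squad", split="train")
few_shot_train = train.shuffle(seed=SEED).select(range(K))  # K-example training subset
print(few_shot_train)  # fine-tuning would use this subset instead of the full split
```
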
### Bart Base Few Shot K 256 Finetuned Squad Seed 0

Apache-2.0 · anas-awadalla · 13 downloads · 0 likes · Question Answering System · Transformers

A fine-tuned version of facebook/bart-base on the SQuAD dataset, suitable for question-answering tasks.

### Bart Base Few Shot K 128 Finetuned Squad Seed 2

Apache-2.0 · anas-awadalla · 13 downloads · 0 likes · Question Answering System · Transformers

A BART-base model fine-tuned on the SQuAD dataset, suitable for QA tasks.

### Bart Base Few Shot K 64 Finetuned Squad Seed 0

Apache-2.0 · anas-awadalla · 13 downloads · 0 likes · Question Answering System · Transformers

A fine-tuned version of facebook/bart-base on the SQuAD dataset, suitable for question-answering tasks.

### Roberta Base Few Shot K 128 Finetuned Squad Seed 42

MIT · anas-awadalla · 19 downloads · 0 likes · Question Answering System · Transformers

A QA model fine-tuned from RoBERTa-base on the SQuAD dataset in a few-shot setting.

### Roberta Base Few Shot K 256 Finetuned Squad Seed 6

MIT · anas-awadalla · 20 downloads · 0 likes · Question Answering System · Transformers

A question-answering model fine-tuned from RoBERTa-base on the SQuAD dataset, suitable for reading comprehension tasks.

### Roberta Base Few Shot K 1024 Finetuned Squad Seed 4

MIT · anas-awadalla · 19 downloads · 0 likes · Question Answering System · Transformers

A QA model fine-tuned from RoBERTa-base on the SQuAD dataset, suitable for reading comprehension tasks.